Week 6 (July 9-13)

Overview

This week I discovered my SqueezeNet implementation is supremely poor at detection. I pursued two avenues for improvement, culminating in an attempt (not yet successful) to train a new SqueezeNet model from scratch on the ImageNet dataset.

Monday

This day I solved (somewhat) the SqueezeNet problems - the training and testing scripts now run to completion. However, they give beyond abysmal results (< 1% average precision). I asked to schedule a meet-up tomorrow morning with the fellow student worker I have been exchanging emails with; with hope, he will have some insight on how to save SqueezeNet from itself.
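For context on that metric, here is a minimal sketch of how a VOC-style average precision could be computed for a single class. The function, its inputs, and the example numbers are all hypothetical simplifications; a real detection evaluation would also need IoU matching of predicted boxes against ground truth.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """scores: confidence of each detection; is_true_positive: 1 if that
    detection matched a ground-truth box, else 0; num_ground_truth:
    total ground-truth objects of this class in the dataset."""
    order = np.argsort(-np.asarray(scores))            # rank by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / (cum_tp + cum_fp)
    # VOC-style envelope: make precision non-increasing from right to left.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Integrate precision over recall.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Example: three detections, only the lowest-scoring one correct,
# against two ground-truth objects -> a low AP (~0.17).
print(average_precision([0.9, 0.8, 0.3], [0, 0, 1], 2))
```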

Afterwards, I ran five iterations of 20-epoch training sessions of SqueezeNet to see how the results would turn out. They turned out consistently awful, so clearly this implementation would not work as-is. Training from scratch on ImageNet (a 1.2-million-image dataset for training neural networks) might be called for.
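For the record, a minimal sketch of what those repeated runs looked like in spirit: five independent 20-epoch runs, to check whether the awful results are consistent or just one unlucky initialization. The random placeholder tensors and the torchvision classifier (torchvision >= 0.13) stand in for my actual detection setup, which this is not.

```python
import torch
from torch import nn
from torchvision import models

def train_once(num_epochs=20):
    # Fresh, randomly initialized SqueezeNet for each run (no pretrained weights).
    model = models.squeezenet1_1(weights=None, num_classes=10)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(32, 3, 224, 224)    # placeholder images
    y = torch.randint(0, 10, (32,))     # placeholder labels
    for _ in range(num_epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Five independent runs to see whether the outcome is consistently bad.
for run in range(5):
    print(f"run {run}: final loss {train_once():.4f}")
```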

Tuesday

This day I met with that fellow student worker. They recommended I run the testing script with SqueezeNet but on the training data, to check whether it had any accuracy there - it was about the same as with the testing data, which is absurd, since the network should at least have fit the set it was trained on. Training on ImageNet seemed imminent.
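The logic of that check, sketched below with hypothetical evaluate and loader stand-ins for my actual scripts: run the exact same evaluation on both splits. Near-equal scores mean the network never even fit its own training data, so the failure is in the pipeline rather than in generalization.

```python
def compare_splits(model, train_loader, test_loader, evaluate):
    """evaluate(model, loader) -> average precision; both `evaluate`
    and the loaders are hypothetical stand-ins for my actual scripts."""
    train_ap = evaluate(model, train_loader)
    test_ap = evaluate(model, test_loader)
    print(f"train AP: {train_ap:.3f}  test AP: {test_ap:.3f}")
    # A trained network should score far better on data it has seen.
    # If it does not, the training loop or data loading is broken.
    if train_ap <= test_ap + 0.01:
        print("no sign the network fit its training set - suspect the pipeline")
```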

Wednesday

This day I tried training SqueezeNet on the Pascal VOC 2007 and 2012 datasets combined - the results are about the same. I have begun downloading the ImageNet training set so I can train SqueezeNet from scratch, but the download was too big to fit on the lab computer. With hope it can squeeze onto a USB drive, though even that will prove too small to hold it all at once. Truly an information overload.
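A sketch of how the combined 2007+2012 training set can be assembled with torchvision; the root path and download flag are assumptions about a local setup, not necessarily how my scripts do it.

```python
from torch.utils.data import ConcatDataset
from torchvision.datasets import VOCDetection

# trainval splits of both years, concatenated into one dataset.
voc07 = VOCDetection("data/voc", year="2007", image_set="trainval", download=True)
voc12 = VOCDetection("data/voc", year="2012", image_set="trainval", download=True)
combined = ConcatDataset([voc07, voc12])
print(len(combined), "training images")  # ~5k from 2007 + ~11.5k from 2012
```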

Thursday

This day I petitioned my predecessor about deleting a login on the lab computer that was taking up a massive amount of storage space but had nothing of use on it - they did not respond to me this day. Deleting that login would allow me to download ImageNet directly onto the lab computer rather than using my USB drive as a middleman.

Afterwards, I checked and found ImageNet had downloaded 60 GB of content onto my USB drive - that should be sufficient for training a network. It came as a compressed archive, of course, so I had it extract in place on the USB drive, since it would be too big to extract onto the computer's drive. That took multiple hours, but it did finish. Tomorrow I will set SqueezeNet, or at least AlexNet, running on it over the weekend - SqueezeNet does not appear to be natively implemented, so I will likely have to implement it myself. Additionally, based on the number of images the ImageNet website says to expect and the completion times observed so far, one epoch will take 5-6 hours, so this will not be a fast process. All the better that it lands on a Friday.
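A sketch of both steps, assuming a hypothetical archive path and an assumed processing throughput; only the ~1.2 million image count comes from the ImageNet website.

```python
import tarfile

# Hypothetical locations on the USB drive; extraction happens in place
# beside the archive because the computer's drive cannot hold the result.
archive = "/mnt/usb/imagenet_train.tar"
tarfile.open(archive).extractall(path="/mnt/usb/imagenet")

# Back-of-envelope epoch time: image count from the ImageNet website,
# throughput assumed from the timings observed so far.
num_images = 1_200_000
images_per_second = 60        # assumed
hours_per_epoch = num_images / images_per_second / 3600
print(f"~{hours_per_epoch:.1f} hours per epoch")  # ~5.6 hours, i.e. 5-6 hours
```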

Friday

This day I attended the weekly lab-wide meeting, where the director calls together my coworker, two student workers on summer leave, and me so that we may discuss our respective progress over the previous week and troubleshoot issues. I talked about my plan to begin training SqueezeNet on ImageNet this day, or at least AlexNet to prove the pipeline works.

Afterwards, I discovered some issues with the dataset I downloaded and set about solving them one by one. The first two problems were solved outright; the last fix was less a solution and more of an 'I hope this fixes it', because otherwise I will be back at square one. What makes that fix worse is that it takes all night to download the file (download speed plus the bottleneck of writing to a USB drive), so I will not get to start training SqueezeNet until Monday at the earliest - which means missing out on two days' worth of training time. A waste.
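The overnight figure follows from whichever of the two rates is slower; the numbers below are illustrative assumptions, not measurements.

```python
# The effective transfer rate is capped by the slower of the network
# download and the USB write. All three numbers are assumptions.
file_gb = 60
download_mb_s = 2             # assumed network speed, MB/s
usb_write_mb_s = 10           # assumed USB write speed, MB/s
effective_mb_s = min(download_mb_s, usb_write_mb_s)
hours = file_gb * 1024 / effective_mb_s / 3600
print(f"~{hours:.1f} hours to land on the USB drive")  # ~8.5 hours: all night
```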